Matt Smith, Foundry


I started 'vibe coding' my own apps with AI and I'm loving it

PCWorld

I've always had an interest in programming, because I've always had an interest in computers. I put together websites in HTML as a teenager (which, yes, were hosted on GeoCities) and have occasionally dabbled in Python since. Yet none of my projects got very far, and apart from my early websites, I never made anything useful. My efforts all followed a familiar pattern: I'd fixate on a particular resource--like an O'Reilly book or an online course--and get started with great enthusiasm, but once I realized I was months or years away from creating anything remotely useful, I'd give up. That changed in late 2024, when my general frustration with WordPress, which I was using for my personal website, got the better of me. In a fit, I threw my website's content plus a screenshot of it into Claude 3.5 Sonnet and asked the AI to replicate my site with HTML, CSS, and JavaScript.


I tried running AI chatbots locally on my laptop -- and they kinda suck

PCWorld

Newer open LLMs often brag about big benchmark improvements, and that was certainly the case with DeepSeek-R1, which came close to OpenAI's o1 on some benchmarks. But the model you run on your Windows laptop isn't the same one scoring those high marks; the versions small enough to run locally are distilled variants with far fewer parameters. A simple question--and the LLM's rambling answer--shows how easily smaller models can go off the rails. They frequently fail to notice context or pick up on nuances that should seem obvious. In fact, recent research suggests that smaller language models with reasoning capabilities are especially prone to such faults.